
    Experience-weighted Attraction Learning in Normal Form Games

    In ‘experience-weighted attraction’ (EWA) learning, strategies have attractions that reflect initial predispositions, are updated based on payoff experience, and determine choice probabilities according to some rule (e.g., logit). A key feature is a parameter δ that weights the strength of hypothetical reinforcement of unchosen strategies, according to the payoffs they would have yielded, relative to reinforcement of chosen strategies according to received payoffs. The other key features are two discount rates, φ and ρ, which separately discount previous attractions and an experience weight. EWA includes reinforcement learning and weighted fictitious play (belief learning) as special cases, and hybridizes their key elements. When δ = 0 and ρ = 0, cumulative choice reinforcement results. When δ = 1 and ρ = φ, levels of reinforcement of strategies are exactly the same as expected payoffs given weighted fictitious play beliefs. Using three sets of experimental data, parameter estimates of the model were calibrated on part of the data and used to predict a holdout sample. Estimates of δ are generally around 0.50, φ around 0.8–1, and ρ varies from 0 to φ. Reinforcement and belief-learning special cases are generally rejected in favor of EWA, though belief models do better in some constant-sum games. EWA is able to combine the best features of previous approaches, allowing attractions to begin and grow flexibly as choice reinforcement does, while reinforcing unchosen strategies substantially as belief-based models implicitly do.
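A minimal sketch of the updating rule described above, in the standard Camerer–Ho form; the function names and the logit sensitivity parameter `lam` are illustrative assumptions, not the paper's notation:

```python
import math

def ewa_update(attractions, experience, payoffs, chosen, delta, phi, rho):
    """One EWA step: N(t) = rho*N(t-1) + 1, and for each strategy j,
    A_j(t) = (phi*N(t-1)*A_j(t-1) + [delta + (1-delta)*I(j chosen)]*pi_j) / N(t),
    where pi_j is the payoff strategy j yielded (or would have yielded)."""
    n_new = rho * experience + 1.0
    updated = []
    for j, (a, pi) in enumerate(zip(attractions, payoffs)):
        # unchosen strategies get only delta-weighted hypothetical reinforcement
        weight = 1.0 if j == chosen else delta
        updated.append((phi * experience * a + weight * pi) / n_new)
    return updated, n_new

def logit_probs(attractions, lam=1.0):
    """Logit choice rule: P_j proportional to exp(lam * A_j)."""
    m = max(attractions)  # subtract the max for numerical stability
    exps = [math.exp(lam * (a - m)) for a in attractions]
    z = sum(exps)
    return [e / z for e in exps]
```

Setting delta = 0 and rho = 0 reduces the step to cumulative choice reinforcement (only the chosen strategy is reinforced), while delta = 1 and rho = phi reinforces every strategy by its foregone payoff, as in weighted fictitious play.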

    Iterated dominance and iterated best response in experimental "p-beauty contests"

    Picture a thin country 1000 miles long, running north and south, like Chile. Several natural attractions are located at the northern tip of the country. Suppose each of n resort developers plans to locate a resort somewhere on the country's coast (and all spots are equally attractive). After all the resort locations are chosen, an airport will be built to serve tourists, at the average of all the locations, including the natural attractions. Suppose most tourists visit all the resorts equally often, except for lazy tourists who visit only the resort closest to the airport; so the developer who locates closest to the airport gets a fixed bonus of extra visitors. Where should a developer locate to be nearest to the airport? The surprising game-theoretic answer is that all the developers should locate exactly where the natural attractions are. This answer requires at least one natural attraction at the northern tip, but does not depend on the fraction of lazy tourists or the number of developers (as long as there is more than one).
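Under the simplifying assumptions that the attractions sit at mile 0 (the northern tip) and that all developers simultaneously relocate each round, the iterated best-response dynamic can be sketched as:

```python
def airport(locations, attractions):
    """The airport is built at the average of all resort and attraction locations."""
    pts = list(locations) + list(attractions)
    return sum(pts) / len(pts)

def iterate_best_response(locations, attractions, rounds=100):
    """Each round, every developer relocates to the current airport position
    (a best response to being nearest the airport, ignoring one's own small
    pull on the average); iterating converges toward the attractions."""
    locs = list(locations)
    for _ in range(rounds):
        target = airport(locs, attractions)
        locs = [target] * len(locs)
    return locs
```

Each round shrinks the developers' distance to the attractions by a factor of n/(n + m) (n developers, m attractions), so play converges to the attraction location for any number of developers, matching the game-theoretic answer above.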

    A cognitive hierarchy model of games

    Players in a game are “in equilibrium” if they are rational and accurately predict other players' strategies. In many experiments, however, players are not in equilibrium. An alternative is “cognitive hierarchy” (CH) theory, where each player assumes that his strategy is the most sophisticated. The CH model has inductively defined strategic categories: step 0 players randomize; and step k thinkers best-respond, assuming that other players are distributed over step 0 through step k − 1. This model fits empirical data, and explains why equilibrium theory predicts behavior well in some games and poorly in others. An average of 1.5 steps fits data from many games.

    A cognitive hierarchy theory of one-shot games: Some preliminary results

    Strategic thinking, best-response, and mutual consistency (equilibrium) are three key modelling principles in noncooperative game theory. This paper relaxes mutual consistency to predict how players are likely to behave in one-shot games before they can learn to equilibrate. We introduce a one-parameter cognitive hierarchy (CH) model to predict behavior in one-shot games, and initial conditions in repeated games. The CH approach assumes that players use k steps of reasoning with frequency f(k). Zero-step players randomize. Players using k (≥ 1) steps best respond given partially rational expectations about what players doing 0 through k − 1 steps actually choose. A simple axiom, which expresses the intuition that steps of thinking are increasingly constrained by working memory, implies that f(k) has a Poisson distribution (characterized by a mean number of thinking steps τ). The CH model converges to dominance-solvable equilibria when τ is large, predicts monotonic entry in binary entry games for τ < 1.25, and predicts effects of group size which are not predicted by Nash equilibrium. Best-fitting values of τ have an interquartile range of (0.98, 2.40) and a median of 1.65 across 80 experimental samples of matrix games, entry games, mixed-equilibrium games, and dominance-solvable p-beauty contests. The CH model also has economic value because subjects would have raised their earnings substantially if they had best-responded to model forecasts instead of making the choices they did.
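The Poisson-CH recursion can be sketched for a p-beauty contest on guesses 0–100; the function names, the guess grid, and the truncation at max_k are illustrative assumptions (step-0 players randomize uniformly, so their mean guess is 50):

```python
import math

def poisson_weights(tau, max_k):
    """Truncated Poisson frequencies f(k) = e^{-tau} tau^k / k!, renormalized."""
    w = [math.exp(-tau) * tau**k / math.factorial(k) for k in range(max_k + 1)]
    z = sum(w)
    return [x / z for x in w]

def ch_beauty_contest(tau, p=2/3, max_k=10):
    """Mean guess by thinking step in a p-beauty contest on [0, 100].

    Step-0 players randomize uniformly (mean guess 50); a step-k player
    best responds to the renormalized mix of steps 0..k-1, guessing p
    times that mix's mean guess."""
    f = poisson_weights(tau, max_k)
    guesses = [50.0]
    for k in range(1, max_k + 1):
        lower = f[:k]          # beliefs put mass only on lower steps,
        z = sum(lower)         # renormalized to sum to one
        mean_lower = sum(fh / z * guesses[h] for h, fh in enumerate(lower))
        guesses.append(p * mean_lower)
    return f, guesses
```

With τ near the reported median of 1.65, higher-step guesses fall toward zero while most population mass sits at low steps, producing the intermediate average choices observed in experiments rather than the Nash prediction of zero.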

    Challenge on the Astrophysical R-process Calculation with Nuclear Mass Models

    Our understanding of the rapid neutron capture (r-process) nucleosynthesis in the universe depends on the reliability of nuclear mass predictions. Motivated by the newly developed mass table in the relativistic mean field (RMF) theory, this paper investigates the influence of mass models on r-process calculations, assuming the same astrophysical conditions. The different model predictions for the so far unreachable nuclei lead to significant deviations in the calculated r-process abundances.
    Comment: 3 pages, 3 figures